Shannon Entropy Rescues the Tuesday Child
My son Alexey Radul and I were discussing the Tuesday’s child puzzle:
You run into an old friend. He has two children, but you do not know their genders. He says, “I have a son born on a Tuesday.” What is the probability that his second child is also a son?
Here is a letter he wrote me on the subject. I liked it because unlike many other discussions, Alexey not only asserts that different interpretations of the conditions in the puzzle form different mathematical problems, but also measures how different they are.
by Alexey Radul
If you assume that boys and girls are symmetric, and that days of the week are symmetric (and if you have no information to the contrary, assuming anything else would be sheer presumption on your part), then you can be in one of at least two states.
1) You say that “at least one son born on a Tuesday” is all the information you have, in which case your distribution including this information is uniform over consistent cases, in which case your answer is 13/27 boy and your information entropy is
−∑ (1/27) log(1/27) = −log(1/27) = log 27 ≈ 3.2958 nats, where the sum runs over the 27 consistent cases.
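The 27 consistent cases, the 13/27 answer, and the log 27 entropy can all be checked by brute force. Here is a short Python sketch (not part of the original letter) that enumerates the 196 equally likely sex/weekday configurations:

```python
from fractions import Fraction
from math import log

# Each child is a (sex, weekday) pair: 2 * 7 = 14 possibilities, so an
# ordered pair of children gives 14 * 14 = 196 equally likely configurations.
children = [(sex, day) for sex in "BG" for day in range(7)]
families = [(c1, c2) for c1 in children for c2 in children]

TUESDAY_BOY = ("B", 1)  # encode Tuesday as day 1 (the labeling is arbitrary)

# Interpretation 1: condition on "at least one son born on a Tuesday".
consistent = [f for f in families if TUESDAY_BOY in f]

both_boys = [f for f in consistent if f[0][0] == "B" and f[1][0] == "B"]
p_boy = Fraction(len(both_boys), len(consistent))

# The posterior is uniform over the consistent cases, so the entropy is log 27.
entropy = log(len(consistent))

print(len(consistent), p_boy, round(entropy, 4))  # 27 13/27 3.2958
```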
2) You say that the information you have is “The guy might have said any true thing of the form ‘I have at least one {boy/girl} born on a {day of the week}’, and he said ‘boy’, ‘Tuesday’.” This is a different mathematical problem with a different solution. The solution: By a symmetry argument (see below [*]) we must assign uniform probability to him making any true statement in any particular situation. Then we proceed by Bayes’ Rule: the statement we heard is d, and for each possible collection of children h the posterior is p(h|d) ∝ p(d|h) p(h). The prior p(h) is uniform over the 196 configurations; the likelihood p(d|h) is 1 if both children are boys born on Tuesday (only one true statement is available), 1/2 if exactly one child is (two true statements to choose between), and 0 otherwise. Normalizing, the Tuesday-boy/Tuesday-boy case gets posterior probability 1/14 and each of the other 26 consistent cases gets 1/28, so the probability that the other child is also a son is 14/28 = 1/2. The entropy of this posterior is
− ∑ p log p = −(1/14) log(1/14) − 26 × (1/28) log(1/28) ≈ 3.2827 nats.
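Interpretation 2 can be checked the same way. This sketch (again not from the letter, using the uniform-choice likelihood described above) recovers the 1/14 and 1/28 posterior weights, the 1/2 answer, and the 3.2827-nat entropy:

```python
from fractions import Fraction
from math import log

children = [(sex, day) for sex in "BG" for day in range(7)]
families = [(c1, c2) for c1 in children for c2 in children]
TUESDAY_BOY = ("B", 1)

# Interpretation 2: the father picks uniformly among the true statements
# "I have at least one {sex} born on a {day}" available to him.
posterior = {}
for f in families:
    truths = set(f)  # one true statement if the children match, else two
    if TUESDAY_BOY in truths:
        # prior 1/196 per family, likelihood 1/len(truths)
        posterior[f] = Fraction(1, 196) * Fraction(1, len(truths))

z = sum(posterior.values())
posterior = {f: p / z for f, p in posterior.items()}

p_boy = sum(p for f, p in posterior.items()
            if f[0][0] == "B" and f[1][0] == "B")
entropy = -sum(float(p) * log(float(p)) for p in posterior.values())

print(p_boy, round(entropy, 4))  # 1/2 3.2827
```

Note that the matched family (both children Tuesday boys) ends up at 1/14 while each of the other 26 consistent families ends up at 1/28, exactly as in the sum above.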
Therefore that assumed additional structure really is more information, which is only present at best implicitly in the original problem. How much more information? The difference in entropies is 3.2958 − 3.2827 = 0.0131 nats (a nat is to a bit what the natural log is to the binary log). How much information is that? Well, the best I can do is to reproduce an argument of E.T. Jaynes’, which may or may not really apply to this situation. Suppose you have some repeatable experiment with some discrete set of possible outcomes, and suppose you assign probabilities to those outcomes. Then the number of ways those probabilities can be realized as frequencies counted over N trials is proportional to e^(NH), where H is the entropy of the distribution in nats. Which means that the ratio by which one distribution is easier to realize is approximately e^(N(H1 − H2)). In the case of N = 1000 and H1 − H2 = 0.0131, that’s circa 5×10^5. For each way to get a 1000-trial experiment to agree with version 2, there are half a million ways to get a 1000-trial experiment to agree with version 1. So that’s a nontrivial assumption.
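The ratio e^(N(H1 − H2)) for N = 1000 is a one-liner to verify (a quick numeric check, not part of the letter):

```python
from math import exp, log

H1 = log(27)                               # interpretation 1: about 3.2958 nats
H2 = (1/14) * log(14) + (26/28) * log(28)  # interpretation 2: about 3.2827 nats
N = 1000

# Jaynes' entropy concentration argument: ratio of the numbers of ways the
# two distributions can be realized as frequencies over N trials.
ratio = exp(N * (H1 - H2))
print(round(H1 - H2, 4), f"{ratio:.1e}")   # 0.0131 and roughly 5e5
```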
[*] The symmetry argument: We are faced with the following probability assignment problem
Suppose our subject’s first child is a boy born on a Tuesday, and his second child is a girl born on a Friday. What probability must we assign to him asserting that he has at least one boy born on a Tuesday?
Good question. Let’s transform our coordinates: Let Tuesday’ be Friday, Friday’ be Tuesday, boy’ be girl, girl’ be boy, first’ be second and second’ be first. Then our problem becomes
Suppose our subject’s second’ child is a girl’ born on a Friday’, and his first’ child is a boy’ born on a Tuesday’. What probability must we assign to him asserting that he has at least one girl’ born on a Friday’?
Our transformation necessitates p(boy Tuesday) = p(girl’ Friday’), and likewise p(girl Friday) = p(boy’ Tuesday’). But our state of complete ignorance about the man’s attitudes toward boys, girls, Tuesdays, Fridays, and first and second children has the symmetry that question and question’ are the same question, and must, by the desideratum of consistency, have the same answer. Therefore p(boy Tuesday) = p(boy’ Tuesday’) = p(girl Friday); and since these are the only two true statements available to him and he must make one of them, the two probabilities sum to 1, so each equals 1/2.
misha:
Another popular interpretation of entropy is the average measure of surprise produced by an outcome of an experiment. Imagine some event of probability p happens. You are surprised by s(p). When another event of probability q happens, you are surprised by s(q). The total surprise is s(p)+s(q). On the other hand, if these events are independent, the probability of both of them happening is pq. We conclude that s(pq)=s(p)+s(q). It means that s(p) is proportional to log(p). Taking the average over all the possible outcomes, we get the entropy formula.
6 July 2010, 12:55 pm

Christ Schlacta:
You’re both wrong; genetics dictates that the gender and date of birth of the second child are independent of the first, therefore the probability of his second child being a girl is 50/50 regardless of what and when the first child is born, and vice versa. The fact that the first child is a boy born on a Tuesday is simply extra information thrown in to mislead the problem-solver.
8 January 2011, 5:17 pm